Market Roundup

September 6, 2002

 

 

 

Dell, SUNY-Buffalo Announce Linux-Based HPCC

HP and SAP Target SMBs in Europe

One Size Does Not Fit All, At Least Not for Software

Another Piece in the Security Puzzle

QLogic Drops InfiniBand Development

 

 

 

Dell, SUNY-Buffalo Announce Linux-Based HPCC

By Charles King

Dell and the State University of New York (SUNY) at Buffalo have announced what they described as one of the largest clusters of Linux servers ever deployed at a U.S. educational institution. The high-performance computing cluster (HPCC) comprises more than 2,000 Dell PowerEdge servers and storage on a Dell/EMC SAN. The HPCC will be used by the Skolnick Group in the Buffalo Center of Excellence in Bioinformatics at the university for human genome research, bioinformatics, protein structure prediction, and large-scale computer simulations. The cluster consists of 2,000+ Dell PowerEdge 2650 and 1650 dual-processor servers running Red Hat Linux. Platform Computing’s LSF 5 workload management software automates and manages complex computations across the server nodes. The cluster connects to a 16 terabyte Dell/EMC SAN and uses Extreme Networks’ BlackDiamond switches for Gigabit I/O connectivity between the nodes. Dell is providing a variety of services, from custom configuration to installation, to assist in completing the cluster.
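
For readers curious about the mechanics behind the workload management piece, the sketch below illustrates roughly how a compute job can be handed to an LSF-managed cluster from a front-end node. It is a minimal, hypothetical example: the queue name, processor count, and bioinformatics executable are our own placeholders, not details of the SUNY-Buffalo configuration.

    # Hypothetical sketch of submitting a batch job to an LSF-managed cluster.
    # The queue, core count, and executable are illustrative assumptions only.
    import subprocess

    def submit_lsf_job(job_name, num_procs, queue, command):
        """Build a bsub command line; LSF then schedules the job on free nodes."""
        bsub_cmd = [
            "bsub",
            "-J", job_name,                # job name reported by bjobs
            "-n", str(num_procs),          # processors requested across the cluster
            "-q", queue,                   # target batch queue
            "-o", f"{job_name}.%J.out",    # per-job output file (%J expands to the job ID)
            command,
        ]
        result = subprocess.run(bsub_cmd, capture_output=True, text=True, check=True)
        return result.stdout.strip()       # e.g., "Job <12345> is submitted to queue <normal>."

    if __name__ == "__main__":
        print(submit_lsf_job("fold_run01", 64, "normal",
                             "./predict_structure --input sequences.fa"))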

First off, the new SUNY-Buffalo HPCC is an obvious win for the vendors involved; it is likely particularly gratifying for Dell, both financially and in terms of prestige, and for EMC as an affirmation of its decision to partner closely with Dell. We believe Platform Computing is also likely to gain from the project’s high profile, which further validates the company’s new LSF 5 workload management software for cluster and grid environments, and that Red Hat’s inclusion could further the company’s efforts to broaden the market penetration of its Linux solutions. All well and good for the vendors, but does that mean the cluster will be a slam-dunk success for SUNY-Buffalo? From a purely technical standpoint, it should be. HPCC is a known quantity, and solutions have been sold, installed, and supported by vendors with extensive HPCC experience, including most of those listed in this announcement. If there is a question mark here, it is probably Dell. Though the cluster utilizes field-tested Dell technologies, we are far from sanguine about the company’s service and support capabilities. Dell has marched up the hardware food chain by leveraging supply chain acumen and brutally low prices into market leadership, but the company has yet to prove that it can deliver adequate, let alone world-class, service and support for the enterprise customers it so covets. Overall, we expect the SUNY-Buffalo project will test, for better or for worse, the mettle of Dell’s greater ambitions.

Beyond its implications for the vendors involved, we also see SUNY-Buffalo’s project as a further indicator of elemental changes under way in the high-performance computing sector. In the past, HPC installations were the sole prerogative of Big Iron hardware vendors like IBM, Compaq, and HP. While those vendors still maintain a commanding HPC presence, technological advances in microchip design, high-performance networking, clustering, and grid computing have fundamentally altered the underpinnings of HPC environments. Instead of the singular, mainframe-dominated installations of the past, HPC deployments are now just as likely to be made up of numerous RISC- or Intel-based servers spread across widely separated geographic locations. This has changed not only the way HPC is used but also the audience for these solutions. While research facilities such as the Buffalo Center of Excellence in Bioinformatics will continue to be prime users of HPC, commercial and industrial deployments such as the IBM/GM HPC environment announced last week are on the rise. Overall, we expect the technological evolution of HPC will drive these solutions and the advantages they offer further and further into the commercial sector, providing capabilities heretofore unavailable to even the largest of users.

 

HP and SAP Target SMBs in Europe

By Jim Balderston

HP and SAP this week announced an alliance designed to deliver SAP Business One to SMB customers in the EMEA market through HP’s reseller channel. The alliance will begin its efforts in Germany, followed by the Austrian, Swiss, UK, and Benelux markets. HP will help SAP identify, qualify, pre-select, and manage local resellers, and will join SAP in creating SAP Business One Centers in the EMEA markets to enable ongoing training and sales support. The two companies will jointly market the SAP Business One offerings. SAP Business One offers a range of business information applications, including general administration, financial accounting, sales and distribution, purchasing, warehouse management, and partner management.

In a number of ways, the HP/SAP initiative is not all that surprising, though in others it qualifies as at least intriguing. The deal makes sense for SAP, which has had less success in the SMB market largely because its traditional product set is widely — and correctly — viewed as expensive, complex, and difficult to deploy. Unlike the large enterprises that typically have installed SAP systems, SMBs have far less tolerance for expensive, unwieldy product suites that offer every conceivable function under the sun, and do so in a fashion that makes end users prefer root canal work to actually using (or learning to use) the application at hand. Enter HP and its VARs. HP’s channel offers one viable, organized methodology for SAP to reach down market in a manner that could well resonate — at least initially — with SMBs. But the deal is anything but a slam dunk at this point. If the companies fail to leverage one another’s considerable experience in a concerted, persistent effort to reach into the SMB market, the initiative will likely be remembered as a failure whose whole was far less than the sum of its parts.

We suspect that SAP sees the initiative as an opportunity to generate SMB sales despite the company’s historical lack of traction in this market. Many SMBs use applications that provide much of the same user-facing functionality as SAP offerings, but without the robustness and complexity. Vendors like Sage have made a solid business out of serving SMBs with simplified offerings that deliver many of SAP’s core functionalities. Perhaps SAP has decided the time has come to take back some of that market. At the same time, SMBs using Sage may see this as a good time to move to SAP for the sake of business efficiency and future expandability. While we believe opportunities exist for HP and SAP to take advantage of these dynamics, doing so will take more than press releases and alliances. SAP has a well-earned history of offering products that in essence provide a very large hammer to drive SMBs’ relatively small nails. Reversing that perception by offering affordable, usable SMB alternatives will not happen overnight, and will require both HP and SAP to create, and remain committed to, a viable long-term SMB strategy.

 

One Size Does Not Fit All, At Least Not for Software

By Clay Ryder

Novell has announced a new software pricing program that establishes two additional classes of end users and pricing. The new Business-to-Consumer and Government-to-Citizen user license pricing models are targeted at organizations, such as businesses with many customers and government agencies and their constituencies, that want to make Web services and applications broadly available to communities outside their organizations. This initiative does not affect the Novell standard user license, which typically applies to the employees, suppliers, etc. of a business. The Business-to-Consumer user license is priced at 25% of the standard user license, and the Government-to-Citizen user license at 10% of the standard price. In all cases, licensing charges are based on the number of individual users of a software service, not the number of computing devices connected to the network. In an unrelated announcement, Microsoft introduced Microsoft Works Suite 2003, its home productivity suite, which includes the latest versions of Microsoft Works, Word, Encarta, Money, Picture It! Photo, and Streets & Trips. In addition, the product includes the new Task Launcher Home, with a calendar and to-do list designed to track family schedules, and My Project Organizer, which integrates all six applications in the Works Suite. Works Suite 2003 is available at an estimated retail price of $109 with a $15 mail-in rebate coupon.
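
To put the new license classes in rough perspective, the following back-of-the-envelope sketch applies Novell’s announced multipliers to an assumed standard per-user price; the dollar figure is purely a placeholder, since Novell’s actual list prices vary by product.

    # Back-of-the-envelope comparison of Novell's user-license classes.
    # STANDARD_PRICE is an assumed placeholder; only the 25% and 10%
    # multipliers come from the announcement itself.
    STANDARD_PRICE = 100.00    # hypothetical standard user license price, per user

    LICENSE_CLASSES = {
        "Standard (employees, suppliers, etc.)": 1.00,
        "Business-to-Consumer":                  0.25,   # 25% of the standard license
        "Government-to-Citizen":                 0.10,   # 10% of the standard license
    }

    def license_cost(user_class, num_users, standard_price=STANDARD_PRICE):
        """Charges track individual users of the service, not connected devices."""
        return LICENSE_CLASSES[user_class] * standard_price * num_users

    for cls in LICENSE_CLASSES:
        print(f"{cls}: ${license_cost(cls, 10_000):,.2f} for 10,000 users")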

While there has been much hemming and hawing about software licensing fees of late, the issue has long been a matter of debate for two reasons. First, licenses provide income to those issuing them, and second, license fees implicitly propose a valuation on the use of said software. From a manufacturing perspective, one could argue that bigger software should cost more, since after all it takes more resources to create. But from a user’s perspective, big software where only a fraction of its capabilities addresses a user’s actual needs may represent an underutilized or overpriced investment. This notion of value pricing is hardly new or unique, but in the world of software it has rarely been uniformly practiced.

Consider these two seemingly unrelated announcements from Novell and Microsoft. Novell clearly recognizes that an occasional user not directly affiliated with an organization will likely derive less value from a given set of software than would an employee. By lowering the fee for this class of user, Novell is aligning its revenue more closely with the value it delivers. In a different vein, Microsoft’s latest version of Works delivers home productivity software at a price substantially lower than that of its business-oriented cousin, Office XP. But while Works is targeted at the home user, its word processor is the same as in Office XP, and the suite is priced lower than a standalone copy of Word. In addition, Works offers several capabilities that are similar, though not identical, to those of the low-end version of Office. While Works is clearly meant to protect corporate price points for Office, it does so in a way that is inherently conflicted, since nothing would stop businesses from buying Works solely to get Word, arguably the most sought-after and broad-reaching office productivity application. Thus, the Microsoft approach is to price based upon total product capability (vendor value) rather than user-derived value. While this seems consistent with the company’s recent corporate software pricing initiatives, it contradicts Microsoft’s historic focus on the individual, manifest in its catchphrase, “Where do you want to go today?” As a result, in its attempt to devise and deliver different levels of product value to its users, Microsoft has created a pricing conflict for one of its most popular products. Conversely, Novell’s approach, which recognizes that software is not equally useful to all users, avoids such conflict. Given the current state of the market, we find it reassuring that a leading software vendor is looking to align its business interests with those of its customers. Equally disquieting is the fact that THE leading software vendor seems determined to further distance itself from its customers’ interests — a notable gaffe that in the short term might provide additional revenue, but at the likely price of long-term alienation of its customer base.

 

Another Piece in the Security Puzzle

By Jim Balderston

IBM has announced it is acquiring Access360, a privately held maker of identity management software based in Irvine, CA. Terms of the deal were not released; it is expected to close in the fourth quarter of this year. Access360 provides software that allows enterprises to consolidate identity data, which in turn allows access privileges for employees, contractors, business partners, and customers on an enterprise network to be automated. The Access360 technology will be folded into the Tivoli software portfolio as part of the IBM Software Group.

This latest addition to the IBM security portfolio continues a path IBM has been following for quite some time. IBM long ago recognized that IT security is much more complex than a binary in/out function and is inextricably bound to each application. To achieve the level of security that enterprises require, security must be far more granular and specific than merely walling off parts of the enterprise network. With heterogeneous environments, merged enterprise networks, multiple layers of partners, customers, and employees, and increasingly fragmented access points, the need for security technologies that attach directly to individual users could not be greater. Identity management, when applied manually and piecemeal across an enterprise environment, creates a quagmire of repetitive tasks simply to add or delete a user. By automating and centralizing identity management, users can be easily added or deleted while ensuring that the correct authority is granted to the enterprise network and the applications those users need to do their jobs. The need for such fine-grained identity control will not decrease anytime soon, with new technologies like Web Services and Grid Computing adding to the range of systems and applications that an enterprise user can traverse. Identity management is only one piece of the larger security puzzle, yet IBM’s acquisition of Access360 indicates that it is sticking with its strategy of providing a comprehensive set of interlocking and complementary security technologies that promise higher degrees of safety and ease of administration than security point products requiring hand tuning on a per-system, per-application, or per-network basis. Of course, none of this will have a great impact on real enterprise IT safety until security becomes as common and simple to use as a browser.
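
As a purely conceptual illustration of why centralization matters here, the sketch below models role-driven provisioning in a few lines of Python. It does not reflect Access360’s or Tivoli’s actual interfaces; every name in it is an assumption, and the point is simply that a single add or remove operation fans out to every connected system.

    # Conceptual sketch of centralized, role-based identity provisioning.
    # These names do not represent Access360 or Tivoli APIs.
    ROLE_ENTITLEMENTS = {
        "employee":   {"email", "intranet", "erp"},
        "contractor": {"email", "intranet"},
        "partner":    {"partner_portal"},
    }

    class IdentityStore:
        def __init__(self):
            self.accounts = {}                         # user id -> granted systems

        def add_user(self, user_id, role):
            """One call provisions every system the role entitles the user to."""
            self.accounts[user_id] = set(ROLE_ENTITLEMENTS[role])

        def remove_user(self, user_id):
            """One call revokes access everywhere, avoiding orphaned accounts."""
            return self.accounts.pop(user_id, set())

    store = IdentityStore()
    store.add_user("jsmith", "contractor")
    print(store.accounts["jsmith"])                    # {'email', 'intranet'}
    print(store.remove_user("jsmith"))                 # access revoked across all systems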

 

QLogic Drops InfiniBand Development

By Charles King

During a keynote speech at a Salomon Smith Barney technology conference in New York City this week, QLogic CEO and Chairman H.K. Desai announced that the company would shelve its InfiniBand switch silicon due to uncertainties about the market for InfiniBand solutions. Desai pointed to the extra help needed in qualification processes around new technologies, and cited the company’s support for investment protection. In contrast to the InfiniBand decision, Desai said QLogic remains firmly committed to the iSCSI market and will increase its investment in that area.

The past six months have been a tough time to be in the InfiniBand business. After InfiniBand was touted as a technology that promised to fundamentally change datacenter computing, heavy-hitting supporters Intel and Microsoft decided to leave the market to other vendors, and were followed to the exits by players including Banderacom, Mellanox, and now QLogic. Does the withdrawal of Intel and Microsoft mean InfiniBand is finis, or are other market forces at work here? First, it should be remembered that despite the media noise generated around InfiniBand, the technology has an extremely narrow range of impact. Basically, InfiniBand is an I/O-focused solution that bypasses the limitations of PCI bus technology to dramatically speed data movement between servers. But the technology also carries shortcomings, including significant range limitations, that make it largely useless outside of datacenter environments. While Intel and Microsoft may have abandoned ship, datacenter vendors including IBM, HP, Sun, and Dell remain publicly committed to developing InfiniBand-based solutions. Viewed tactically, Intel’s and Microsoft’s decisions make perfect sense. Despite protestations to the contrary, Microsoft’s datacenter efforts to date have largely been confined to smaller installations that might not be significantly enhanced by InfiniBand. Intel’s decision cuts two ways. First, the company’s current datacenter efforts largely mirror Microsoft’s, though that will likely change as its 64-bit Itanium solutions come to the fore. Second, the company’s PCI Express (formerly 3GIO) technology provides an alternative to InfiniBand over which Intel will maintain greater developmental control.
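
To put the PCI bottleneck in rough numerical terms, the sketch below compares commonly cited peak link rates; these are theoretical figures rather than benchmarked throughput, and real-world results vary with encoding overhead and implementation.

    # Rough peak-bandwidth comparison using commonly cited figures (not benchmarks).
    # InfiniBand data rates shown are after 8b/10b encoding overhead.
    PEAK_BANDWIDTH_MB_S = {
        "PCI 32-bit / 33 MHz": 133,     # classic shared PCI bus
        "PCI 64-bit / 66 MHz": 533,
        "InfiniBand 1x link":  250,     # 2.5 Gb/s signaling, ~2 Gb/s data
        "InfiniBand 4x link": 1000,
    }

    baseline = PEAK_BANDWIDTH_MB_S["PCI 32-bit / 33 MHz"]
    for link, mb_s in PEAK_BANDWIDTH_MB_S.items():
        print(f"{link}: {mb_s} MB/s ({mb_s / baseline:.1f}x classic PCI)")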

On the more practical side, the ongoing soft economy makes InfiniBand’s narrow usability and glacial development cycles a particularly tough sell, especially for smaller vendors like QLogic. The fact is that the current drivers for enterprise technology sales are reducing staff and increasing efficiency, both of which are only tangentially affected by InfiniBand’s current capabilities. With money tight and sales tough to find, small vendors will focus their efforts primarily on low-hanging fruit and hope they find enough to survive. This situation is unlikely to change until the big iron vendors begin shipping InfiniBand-capable servers and storage products, at which point we expect to see the smaller players jump back in. But it will also leave the lion’s share of InfiniBand development and influence in the hands of large vendors that have the financial wherewithal to focus on InfiniBand’s long-term strategic potential. We also see one possibly troubling blip on this somewhat stormy horizon. Intel’s decision to board the PCI Express is likely fueled by the company’s assumption that it can gain traction in and control of the datacenter network as its 64-bit products become widely adopted. We remember well what can happen when competing technologies butt heads (can you say “Betamax vs. VHS,” boys and girls?) and wonder if we are witnessing the genesis of what could eventually become a PCI Express/InfiniBand train wreck further down the tracks.

 

 

The Sageza Group, Inc.

836 W El Camino Real

Mountain View, CA 94040-2512

650·390·0700     fax 650·649·2302

London +44 (0) 20·7900·2819

Munich +49 (0) 89·4201·7144

Amsterdam +31 (0) 35·588·1546

 

sageza.com

 

Copyright © 2002 The Sageza Group, Inc.

May not be duplicated or retransmitted without written permission